Hessian Informed Mirror Descent
Authors
Abstract
Inspired by the recent paper (L. Ying, Journal of Scientific Computing, 84, 1–14 (2020)), we explore the relationship between mirror descent and the variable metric method. When the metric in the mirror descent is induced by a convex function whose Hessian is close to the Hessian of the objective function, the method enjoys both the robustness of mirror descent and the superlinear convergence of Newton-type methods. When applied to linearly constrained minimization problems, we prove global and local convergence, in both the continuous and discrete settings. As applications, we compute Wasserstein gradient flows and the Cahn–Hilliard equation with degenerate mobility. When these problems are formulated using a minimizing movement scheme with respect to a variable metric, our algorithm offers fast convergence for the underlying optimization problem while maintaining the total mass and the bounds of the solution.
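To make the update concrete, below is a minimal sketch of a Hessian-informed mirror descent iteration on a toy unconstrained problem; the objective, the negative-entropy mirror map, and the step size are illustrative choices made here, not the constrained problems or minimizing movement schemes treated in the paper.

import numpy as np

# Toy objective on the positive orthant: f(x) = sum(x*log x) + 0.5*||x - b||^2.
# Its Hessian, diag(1/x) + I, is dominated by diag(1/x) near the boundary, so the
# negative-entropy mirror map phi(x) = sum(x*log x) is a Hessian-informed choice here.
b = np.array([0.2, 1.0, 2.0])

def grad_f(x):
    return np.log(x) + 1.0 + (x - b)   # gradient of the toy objective

def grad_phi(x):
    return np.log(x) + 1.0             # gradient of the mirror map

def grad_phi_inv(y):
    return np.exp(y - 1.0)             # inverse of grad_phi

x, eta = np.ones(3), 0.5
for _ in range(50):
    # Mirror descent step: grad_phi(x_next) = grad_phi(x) - eta * grad_f(x).
    x = grad_phi_inv(grad_phi(x) - eta * grad_f(x))

print(x, np.linalg.norm(grad_f(x)))    # gradient norm is close to zero at the final iterate

Because the update is applied through the gradient of the mirror map, the iterates stay strictly positive without an explicit projection, which illustrates the kind of bound preservation the abstract refers to.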
Similar resources
Composite Objective Mirror Descent
We present a new method for regularized convex optimization and analyze it under both online and stochastic optimization settings. In addition to unifying previously known first-order algorithms, such as the projected gradient method, mirror descent, and forward-backward splitting, our method yields new analysis and algorithms. We also derive specific instantiations of our method for commonly use...
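For context, one standard way to write a composite mirror descent step (a generic formulation, not necessarily the exact notation of the cited paper) is

x_{t+1} = \arg\min_{x} \; \eta \langle g_t, x \rangle + \eta\, r(x) + B_\psi(x, x_t),

where g_t is a (sub)gradient of the loss at x_t, r is the regularizer, and B_\psi is the Bregman divergence of the mirror map \psi. Taking \psi(x) = \tfrac{1}{2}\|x\|_2^2 with r the indicator of a convex set recovers the projected gradient method, while a general r with the same \psi recovers forward-backward splitting.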
Mirror Descent Based Database Privacy
In this paper, we focus on the problem of private database release in the interactive setting: a trusted database curator receives queries in an online manner for which it needs to respond with accurate but privacy preserving answers. To this end, we generalize the IDC (Iterative Database Construction) framework of [15,13] that maintains a differentially private artificial dataset and answers i...
Mirror Descent Search and Acceleration
In recent years, attention has been focused on the relationship between black box optimization and reinforcement learning. Black box optimization is a framework for the problem of finding the input that optimizes the output represented by an unknown function. Reinforcement learning, by contrast, is a framework for finding a policy to optimize the expected cumulative reward from trial and error....
Mirror Descent for Metric Learning
Most metric learning methods are characterized by diverse loss functions and projection methods, which naturally begs the question: is there a wider framework that can generalize many of these methods? In addition, ever persistent issues are those of scalability to large data sets and the question of kernelizability. We propose a unified approach to Mahalanobis metric learning: an online regula...
Robust Blind Deconvolution via Mirror Descent
We revisit the Blind Deconvolution problem with a focus on understanding its robustness and convergence properties. Provable robustness to noise and other perturbations is receiving recent interest in vision, from obtaining immunity to adversarial attacks to assessing and describing failure modes of algorithms in mission critical applications. Further, many blind deconvolution methods based on ...
Journal
Journal title: Journal of Scientific Computing
Year: 2022
ISSN: 1573-7691, 0885-7474
DOI: https://doi.org/10.1007/s10915-022-01933-5